Rejoinder: One-Step Sparse Estimates in Nonconcave Penalized Likelihood Models
Authors
Hui Zou and Runze Li
Abstract
We would like to take this opportunity to thank the discussants for their thoughtful comments and encouragement on our work. The discussants raised a number of issues from theoretical as well as computational perspectives. Our rejoinder will try to provide some insights into these issues and address the specific questions asked by the discussants.

Most traditional variable selection criteria, such as the AIC and the BIC, are (or are asymptotically equivalent to) the penalized likelihood with the $L_0$ penalty, namely $p_\lambda(|\beta|) = \frac{1}{2}\lambda^2 I(|\beta| \neq 0)$, with appropriate values of $\lambda$ (Fan and Li [7]). In general, optimizing the $L_0$-penalized likelihood function via exhaustive search over all subset models is an NP-hard computational problem. Donoho and Huo [3] and Donoho and Elad [2] show that, under some conditions, the solution to the $L_0$-penalized problem can be found by solving a convex optimization problem that minimizes the $L_1$-norm of the coefficients, provided the solution is sufficiently sparse. In other words, under the sparsity assumption the NP-hard best subset variable selection can be solved by efficient convex optimization algorithms. This sheds light on variable selection for high-dimensional models and motivates us to use continuous penalties, such as the $L_1$ penalty, rather than discontinuous penalties, including the $L_0$ penalty. The penalized likelihood procedure with the $L_1$ penalty coincides with the LASSO. In the same spirit as the LASSO, the penalized likelihood with a nonconcave penalty, such as the SCAD penalty, was proposed for variable selection by Fan and Li [5]. The LASSO and the SCAD represent the two main streams of penalization methods for variable selection in the recent literature. Although both methods operate as continuous thresholding rules, they appear to be very different theoretically and computationally.
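For reference, the penalized likelihood and the penalties discussed above can be written out in the notation of Fan and Li [5, 7]; here $\ell(\beta)$ denotes the log-likelihood, $n$ the sample size, and $a > 2$ the second SCAD tuning constant (commonly taken as $a = 3.7$):

\[
\max_{\beta} \; \ell(\beta) - n \sum_{j=1}^{p} p_\lambda(|\beta_j|),
\]

with the $L_0$ (AIC/BIC-type) penalty $p_\lambda(|\beta|) = \tfrac{1}{2}\lambda^2 I(|\beta| \neq 0)$, the $L_1$ (LASSO) penalty $p_\lambda(|\beta|) = \lambda |\beta|$, and the SCAD penalty defined through its derivative for $\beta > 0$,

\[
p'_\lambda(\beta) = \lambda \left\{ I(\beta \le \lambda) + \frac{(a\lambda - \beta)_+}{(a - 1)\lambda}\, I(\beta > \lambda) \right\}.
\]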
Similar Resources
One-step Sparse Estimates in Nonconcave Penalized Likelihood Models.
Fan & Li (2001) propose a family of variable selection methods via penalized likelihood using concave penalty functions. The nonconcave penalized likelihood estimators enjoy the oracle properties, but maximizing the penalized likelihood function is computationally challenging, because the objective function is nondifferentiable and nonconcave. In this article we propose a new unified algorithm ...
Discussion: One-step Sparse Estimates in Nonconcave Penalized Likelihood Models: Who Cares if It Is a White Cat or a Black Cat?
Rejoinder: One-step Sparse Estimates in Nonconcave Penalized Likelihood Models
Most traditional variable selection criteria, such as the AIC and the BIC, are (or are asymptotically equivalent to) the penalized likelihood with the $L_0$ penalty, namely $p_\lambda(|\beta|) = \frac{1}{2}\lambda^2 I(|\beta| \neq 0)$, and with appropriate values of $\lambda$ (Fan and Li [7]). In general, the optimization of the $L_0$-penalized likelihood function via exhaustive search over all subset models is an NP-hard computational problem. ...
Discussion of "One-step sparse estimates in nonconcave penalized likelihood models" (H. Zou and R. Li)
Hui Zou and Runze Li ought to be congratulated for their nice and interesting work, which presents a variety of ideas and insights in statistical methodology, computing and asymptotics. We agree with them that one- or even multi-step (or -stage) procedures are currently among the best for analyzing complex data-sets. The focus of our discussion is mainly on high-dimensional problems where $p \gg n$: we ...
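The one-step idea running through the excerpts above is to start from a consistent initial estimate, linearize the nonconcave penalty at that estimate (the local linear approximation), and then solve the resulting weighted-$L_1$ problem with a standard convex solver. The following Python sketch illustrates this for a linear model; it is not the authors' implementation. It assumes scikit-learn is available, uses ordinary least squares as the initial estimate, and the names one_step_scad, scad_derivative and the small floor eps are introduced here purely for illustration.

import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

def scad_derivative(beta, lam, a=3.7):
    """Elementwise derivative p'_lambda(|beta_j|) of the SCAD penalty."""
    b = np.abs(beta)
    return lam * ((b <= lam).astype(float)
                  + np.maximum(a * lam - b, 0.0) / ((a - 1.0) * lam) * (b > lam))

def one_step_scad(X, y, lam, a=3.7, eps=1e-6):
    """One-step sparse estimate: linearize SCAD at an initial fit, then solve a weighted lasso."""
    # Initial consistent estimate; OLS is used here for simplicity (assumes n > p).
    beta0 = LinearRegression(fit_intercept=False).fit(X, y).coef_
    # Local linear approximation: p_lam(|b_j|) is replaced by p'_lam(|beta0_j|) * |b_j| + const.
    w = np.maximum(scad_derivative(beta0, lam, a), eps)  # eps avoids division by zero
    # Weighted lasso via column rescaling: with gamma_j = w_j * beta_j,
    # the penalty sum_j w_j * |beta_j| becomes a plain L1 penalty on gamma.
    X_tilde = X / w
    # sklearn's Lasso minimizes (1/(2n)) * ||y - X gamma||^2 + alpha * ||gamma||_1;
    # alpha = 1 because the weights w already carry the lambda factor.
    gamma = Lasso(alpha=1.0, fit_intercept=False, max_iter=50000).fit(X_tilde, y).coef_
    return gamma / w

# Example on synthetic data (lam chosen ad hoc for illustration).
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
beta_true = np.array([3.0, 1.5, 0, 0, 2.0, 0, 0, 0, 0, 0])
y = X @ beta_true + rng.standard_normal(200)
print(one_step_scad(X, y, lam=0.3))

In the published one-step formulation the initial estimate can be any root-n-consistent estimator (for example the unpenalized MLE), and coefficients whose SCAD derivative is zero are left unpenalized; the eps floor above is only a numerical shortcut for that case.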
Publication date: 2007